Online and Stochastic Gradient Methods for Non-decomposable Loss Functions
Modern applications in sensitive domains such as biometrics and medicine
frequently require the use of non-decomposable loss functions such as
precision@k and the F-measure. Compared to point loss functions such as the
hinge loss, these offer much finer-grained control over prediction, but at
the same time present novel challenges in terms of algorithm design and
analysis. In this work we initiate a study of online learning techniques for
such non-decomposable loss functions with an aim to enable incremental learning
as well as design scalable solvers for batch problems. To this end, we propose
an online learning framework for such loss functions. Our model enjoys several
nice properties, chief amongst them being the existence of efficient online
learning algorithms with sublinear regret and online-to-batch conversion
bounds. Our model is a provable extension of existing online learning models
for point loss functions. We instantiate two popular losses, prec@k and pAUC,
in our model and prove sublinear regret bounds for both of them. Our proofs
require a novel structural lemma over ranked lists which may be of independent
interest. We then develop scalable stochastic gradient descent solvers for
non-decomposable loss functions. We show that for a large family of loss
functions satisfying a certain uniform convergence property (that includes
prec@k, pAUC, and F-measure), our methods provably converge to the empirical
risk minimizer. Such uniform convergence results were not known for these
losses and we establish these using novel proof techniques. We then use
extensive experimentation on real-life and benchmark datasets to establish that
our method can be orders of magnitude faster than a recently proposed
cutting-plane method.
Comment: 25 pages, 3 figures. To appear in the proceedings of the 28th Annual
Conference on Neural Information Processing Systems (NIPS 2014).
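The solvers described above operate on non-decomposable measures such as prec@k. As a rough, self-contained illustration only (not the paper's algorithm), the sketch below computes prec@k and takes a mini-batch gradient step on a simple hinge surrogate restricted to the currently top-ranked examples; the function names, the choice of surrogate, and all hyperparameters are assumptions made for this example.

```python
import numpy as np

def prec_at_k(scores, labels, k):
    """Fraction of positives among the k highest-scored examples."""
    top_k = np.argsort(-scores)[:k]
    return labels[top_k].mean()

def minibatch_step(w, X, y, k, lr=0.1):
    """One SGD-style step on a hinge surrogate restricted to the k
    highest-scored examples of the mini-batch (illustrative only)."""
    scores = X @ w
    top_k = np.argsort(-scores)[:k]      # examples currently ranked at the top
    signs = 2.0 * y[top_k] - 1.0         # map {0,1} labels to {-1,+1}
    margins = signs * scores[top_k]
    active = margins < 1.0               # hinge loss is active where margin < 1
    grad = -(signs[active, None] * X[top_k][active]).sum(axis=0) / k
    return w - lr * grad

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(10)
for _ in range(50):
    idx = rng.choice(200, size=64, replace=False)
    w = minibatch_step(w, X[idx], y[idx], k=10)
print("prec@20:", prec_at_k(X @ w, y, k=20))
```

In this toy run each step only touches the examples the current model ranks highest, which conveys the rough intuition of optimising the top of the list; the actual solvers and their convergence guarantees are developed in the paper.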
Surrogate Functions for Maximizing Precision at the Top
The problem of maximizing precision at the top of a ranked list, often dubbed
Precision@k (prec@k), finds relevance in myriad learning applications such as
ranking, multi-label classification, and learning with severe label imbalance.
However, despite its popularity, there exist significant gaps in our
understanding of this problem and its associated performance measure.
The most notable of these is the lack of a convex upper-bounding surrogate
for prec@k. We also lack scalable perceptron and stochastic gradient descent
algorithms for optimizing this performance measure. In this paper we make key
contributions in these directions. At the heart of our results is a family of
truly upper-bounding surrogates for prec@k. These surrogates are motivated in a
principled manner and enjoy attractive properties such as consistency with
prec@k under various natural margin/noise conditions.
These surrogates are then used to design a class of novel perceptron
algorithms for optimizing prec@k with provable mistake bounds. We also devise
scalable stochastic gradient descent style methods for this problem with
provable convergence bounds. Our proofs rely on novel uniform convergence
bounds which require an in-depth analysis of the structural properties of
prec@k and its surrogates. We conclude with experimental results comparing our
algorithms with state-of-the-art cutting plane and stochastic gradient
algorithms for maximizing prec@k.
Comment: To appear in the proceedings of the 32nd International Conference
on Machine Learning (ICML 2015).
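Since the abstract mentions perceptron algorithms for prec@k, here is a hypothetical sketch of what a perceptron-style update keyed to prec@k mistakes could look like: if a negative example is ranked inside the top k while a positive sits outside, the weight vector is nudged along the difference of their features. The pairing rule, learning rate, and function name are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def perceptron_step_prec_at_k(w, X, y, k, lr=1.0):
    """Hypothetical perceptron-style update keyed to prec@k mistakes:
    if a negative is ranked inside the top k while some positive sits
    outside, nudge w so that such a positive can overtake such a negative.
    This is a toy sketch, not the update rule analysed in the paper."""
    scores = X @ w
    order = np.argsort(-scores)          # indices sorted by decreasing score
    top_k, rest = order[:k], order[k:]

    neg_in_top = top_k[y[top_k] == 0]    # negatives wrongly inside the top k
    pos_outside = rest[y[rest] == 1]     # positives wrongly outside the top k
    if len(neg_in_top) == 0 or len(pos_outside) == 0:
        return w                         # no prec@k mistake on this batch

    worst_neg = neg_in_top[np.argmax(scores[neg_in_top])]   # highest-scoring offender
    best_pos = pos_outside[np.argmax(scores[pos_outside])]  # positive closest to the top
    return w + lr * (X[best_pos] - X[worst_neg])
```

The update fires only when the current ranking actually hurts prec@k, mirroring the mistake-driven flavour of classical perceptron updates; the surrogates and mistake bounds themselves are the subject of the paper.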
Plugin estimators for selective classification with out-of-distribution detection
Real-world classifiers can benefit from the option of abstaining from
predicting on samples where they have low confidence. Such abstention is
particularly useful on samples which are close to the learned decision
boundary, or which are outliers with respect to the training sample. These
settings have been the subject of extensive but disjoint study in the selective
classification (SC) and out-of-distribution (OOD) detection literature. Recent
work on selective classification with OOD detection (SCOD) has argued for the
unified study of these problems; however, the formal underpinnings of this
problem are still nascent, and existing techniques are heuristic in nature. In
this paper, we propose new plugin estimators for SCOD that are theoretically
grounded, effective, and generalise existing approaches from the SC and OOD
detection literature. In the course of our analysis, we formally explicate how
naïve use of existing SC and OOD detection baselines may be inadequate for
SCOD. We empirically demonstrate that our approaches yield competitive SC and
OOD detection performance compared to baselines from both literatures.
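To make the plugin idea concrete, here is a minimal sketch of a decision rule that combines a classifier's class probabilities with an OOD score and abstains when either signal looks unreliable. The thresholds, the simple OR-combination, and the function name are assumptions for illustration only and do not reproduce the estimators derived in the paper.

```python
import numpy as np

def scod_decision(probs, ood_score, conf_threshold=0.7, ood_threshold=0.0):
    """Toy plugin-style SCOD rule: abstain when the classifier is unsure
    (low maximum class probability) or the sample looks out-of-distribution
    (high OOD score); otherwise predict the most likely class.
    Thresholds and the OR-combination are illustrative assumptions."""
    if ood_score > ood_threshold:
        return "abstain (flagged as OOD)"
    if probs.max() < conf_threshold:
        return "abstain (low confidence)"
    return int(probs.argmax())

# Toy usage with made-up scores.
print(scod_decision(np.array([0.55, 0.30, 0.15]), ood_score=-1.2))  # abstain (low confidence)
print(scod_decision(np.array([0.90, 0.05, 0.05]), ood_score=2.3))   # abstain (flagged as OOD)
print(scod_decision(np.array([0.90, 0.05, 0.05]), ood_score=-1.2))  # predicts class 0
```

Treating the two abstention triggers separately reflects the two failure modes the abstract distinguishes: samples near the learned decision boundary and samples that are outliers with respect to the training distribution.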